AI and the Future of Digital Public Squares
o1 Pro.icon Below is a summary of, and commentary on, the paper above.
Summary
This paper examines the impact of recent advances in AI, particularly large language models (LLMs), on "digital public squares": the online spaces where political and social discussion takes place. These spaces have fostered a public sphere in which ordinary users can freely exchange opinions, but their quality and health have faced many challenges. New LLM-based technologies for dialogue support, content moderation, and user authentication could make online public spaces more inclusive and constructive, yet they also risk deepening divisions. The paper surveys the possibilities and risks of LLMs for online dialogue and democratic processes across four application areas (collective dialogue systems, bridging systems, community-driven moderation, and proof-of-humanity systems) and proposes directions for future research and policy.
[Details]
[Collective Dialogue Systems]
Tools such as Polis collect feedback from large groups of online participants and extract both points of agreement and the distribution of diverse opinions. They can improve the quality of dialogue and provide useful input to policy makers, but today they carry significant operational costs: required expertise, facilitation effort, and the burden of data analysis.
LLMs can reduce this effort by summarizing statements, translating across languages, generating educational content for participants, and filling in estimates for votes participants did not cast, but care must be taken to avoid inaccurate summaries and biases introduced by the LLMs themselves.
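To make the mechanics concrete, below is a minimal sketch of Polis-style opinion clustering: participants vote agree (+1), disagree (-1), or pass (0) on short statements, the vote matrix is clustered into opinion groups, and statements endorsed by every group surface as cross-group consensus. The function names, threshold, and toy data are illustrative assumptions, not the actual Polis pipeline.

```python
# Minimal sketch of Polis-style opinion clustering (illustrative, not the
# real Polis pipeline). Rows are participants, columns are statements,
# entries are +1 (agree), -1 (disagree), or 0 (pass).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def find_opinion_groups(votes: np.ndarray, n_groups: int = 2) -> np.ndarray:
    """Cluster participants into opinion groups from their vote matrix."""
    coords = PCA(n_components=2).fit_transform(votes)  # 2-D opinion map
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(coords)

def consensus_statements(votes, labels, threshold: float = 0.6) -> list[int]:
    """Statements that a majority of *every* opinion group agrees with."""
    hits = []
    for s in range(votes.shape[1]):
        agree_rates = [(votes[labels == g, s] == 1).mean()
                       for g in np.unique(labels)]
        if min(agree_rates) >= threshold:
            hits.append(s)
    return hits

# Toy data: two camps that disagree on statements 1-3 but agree on 0.
votes = np.array([[ 1,  1, -1,  1],
                  [ 1,  1, -1,  1],
                  [ 1,  1,  0,  1],
                  [ 1, -1,  1, -1],
                  [ 1, -1,  1,  0],
                  [ 1, -1,  1, -1]])
labels = find_opinion_groups(votes)
print("opinion groups:", labels)
print("cross-group consensus:", consensus_statements(votes, labels))
```

An LLM's role in such a system would sit around this core: drafting neutral summaries of each group's statements, translating them, or suggesting likely votes for statements a participant has not yet seen.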
Bridging" is a technique that encourages healthy dialogue by finding common ground between people with different values and political positions.
Conventional social networking sites prioritize engagement maximization, a structural problem that tends to amplify extreme discourse.
Bridging techniques (e.g., X's Community Notes) may mitigate conflict by prioritizing the display of content that users on opposing sides both rate as helpful. LLMs can analyze the nuances of content and user behavior, making such perspective-respecting algorithms easier to implement, though the algorithms may conversely arouse users' suspicion or flatten content into mediocrity.
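The core mechanism publicly documented for Community Notes is a matrix factorization that splits each rating into a note's broad helpfulness (its intercept) and viewpoint-aligned approval (a factor term); ranking by the intercept rewards notes that users with opposing latent viewpoints both endorse. The sketch below illustrates only that idea; the training loop, hyperparameters, and toy data are assumptions, and the production system adds further terms and thresholds.

```python
# Simplified sketch of bridging-based ranking via matrix factorization:
# rating ~ mu + user_intercept + note_intercept + user_factor . note_factor.
# A note with a high intercept is rated helpful even by users whose latent
# factors (roughly, viewpoints) disagree.
import numpy as np

def bridging_scores(ratings, n_users, n_notes, dim=1, epochs=500,
                    lr=0.05, reg=0.1):
    """ratings: list of (user, note, value) with value in {0, 1}."""
    rng = np.random.default_rng(0)
    mu = 0.0
    user_b, note_b = np.zeros(n_users), np.zeros(n_notes)
    user_f = rng.normal(0, 0.1, (n_users, dim))
    note_f = rng.normal(0, 0.1, (n_notes, dim))
    for _ in range(epochs):                      # plain SGD, toy scale
        for u, n, r in ratings:
            err = r - (mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n])
            mu += lr * err
            user_b[u] += lr * (err - reg * user_b[u])
            note_b[n] += lr * (err - reg * note_b[n])
            uf, nf = user_f[u].copy(), note_f[n].copy()
            user_f[u] += lr * (err * nf - reg * uf)
            note_f[n] += lr * (err * uf - reg * nf)
    return note_b  # higher intercept ~ helpful across viewpoints

# Toy data: users 0-1 and 2-3 form two camps. Note 0 is endorsed by both
# camps; notes 1 and 2 are each endorsed by only one camp.
ratings = [(0,0,1),(1,0,1),(2,0,1),(3,0,1),
           (0,1,1),(1,1,1),(2,1,0),(3,1,0),
           (0,2,0),(1,2,0),(2,2,1),(3,2,1)]
print(bridging_scores(ratings, n_users=4, n_notes=3))
```

On the toy data, note 0, endorsed by both camps, ends up with the highest intercept, while the one-sided notes 1 and 2 are largely explained by the factor term instead.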
[Community-Driven Moderation]
In online communities, volunteer moderators keep harmful behavior and spam in check, but their workload keeps growing, leading to burnout and conflict.
LLMs can automate content classification, support rule-making, and give participants advance warnings and educational feedback, reducing the moderation load.
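As a sketch of what such assistance could look like, the snippet below asks an LLM to check a post against a community's own rules and draft educational feedback before any human review; the `premoderate` helper, rule text, and JSON schema are hypothetical illustrations, not an interface from the paper.

```python
# Hypothetical sketch of LLM-assisted pre-moderation: classify a post
# against community-specific rules and draft educational feedback. A human
# moderator reviews the output; nothing is auto-enforced.
import json
from typing import Callable

RULES = [
    "1. No personal attacks or harassment.",
    "2. Stay on topic for this community.",
    "3. No spam or undisclosed self-promotion.",
]

def premoderate(post: str, call_llm: Callable[[str], str]) -> dict:
    """call_llm is any wrapper around a chat-completion API (str -> str)."""
    prompt = (
        "You are a moderation assistant for an online community.\n"
        "Community rules:\n" + "\n".join(RULES) + "\n\n"
        f"Post:\n{post}\n\n"
        'Reply with JSON only: {"violates": true/false, "rule": "..." or '
        'null, "feedback": "..."}. Feedback must be educational, not '
        "punitive."
    )
    return json.loads(call_llm(prompt))
```

Keeping the rules in the prompt rather than baked into the model is what lets each community adapt the tool to its own culture, which matters for the concern raised next.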
On the other hand, if moderators rely too heavily on LLMs, the community's unique context may be overlooked and human judgment eroded. Designs must also remain flexible enough to accommodate each community's culture and diverse needs.
[Proof of Humanity]
As bots and AI agents become ever better at passing as people, mechanisms for confirming that real humans are participating will become increasingly important.
Research is underway on using government-issued IDs, biometrics, and zero-knowledge proofs to attest to online identity, but concerns about privacy, discrimination, data leaks, and surveillance make the balance difficult to strike. In the LLM era, bots are easier to create and harder to detect, so new technical solutions and regulatory frameworks are needed that protect privacy while improving trust.
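To make the design tension concrete, here is a toy sketch of a privacy-preserving personhood credential: an issuer attests once that a user is human and signs a random token, and a service later verifies the attestation without learning who the user is, rejecting reuse. The HMAC construction below is a deliberately simplified stand-in for the blind-signature and zero-knowledge schemes the research actually explores.

```python
# Toy personhood credential (simplified stand-in for real zero-knowledge or
# blind-signature schemes). The issuer checks a human once, then signs a
# random user-chosen "nullifier"; services verify the tag and reject reuse,
# never seeing a name or ID. Caveats: a real design uses asymmetric keys so
# services cannot forge tags, and blinds the nullifier so even the issuer
# cannot link it back to the person.
import hashlib, hmac, secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the trusted issuer

def issue_credential(human_verified: bool, nullifier: bytes) -> bytes:
    """Issuer side: sign the nullifier after a one-time humanity check."""
    if not human_verified:
        raise PermissionError("humanity verification failed")
    return hmac.new(ISSUER_KEY, nullifier, hashlib.sha256).digest()

seen = set()  # service-side replay protection

def verify_human(nullifier: bytes, tag: bytes) -> bool:
    """Service side: accept each valid credential exactly once."""
    expected = hmac.new(ISSUER_KEY, nullifier, hashlib.sha256).digest()
    if nullifier in seen or not hmac.compare_digest(tag, expected):
        return False
    seen.add(nullifier)
    return True

nullifier = secrets.token_bytes(16)      # chosen by the user
tag = issue_credential(True, nullifier)  # one-time identity check
print(verify_human(nullifier, tag))      # True: anonymous but human
print(verify_human(nullifier, tag))      # False: replay rejected
```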
Conclusion
This paper lays out the new opportunities and potential dangers that LLMs and related technologies present for a healthy digital public sphere:
LLMs can upgrade large-scale dialogue systems and lower the barrier to participation through summarization, multilingual support, and more.
Bridging strategies can ease conflict and surface common ground.
Enhanced community-driven moderation can foster trust among participants.
Proof of humanity can prevent bot manipulation and create an environment for honest dialogue.
However, these approaches carry risks, including privacy violations, bias and misinformation, bot abuse, and legal and ethical issues, and they require careful design, social consensus, and policy frameworks.
The paper emphasizes that collaboration among researchers, engineers, policy makers, and civil society, together with practices that stress transparency, accountability, and diversity, is essential to addressing these cross-cutting challenges.
---